




Statistical-Computational Tradeoffs in High-Dimensional Single Index Models

Neural Information Processing Systems

We study the statistical-computational tradeoffs in a high-dimensional single index model Y = f(X⊤β) + ε, where f is unknown, X is a Gaussian vector, and β is s-sparse with unit norm. When Cov(Y, X⊤β) ≠ 0, [43] shows that the direction and support of β can be recovered using a generalized version of the Lasso.
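The recovery claim above can be illustrated with a small simulation. This is a minimal sketch, not the paper's actual procedure: the link f, the sparsity pattern, and the regularization level are all hypothetical choices, and a plain scikit-learn Lasso stands in for the generalized Lasso of [43]. The key point is that when Cov(Y, X⊤β) ≠ 0, an ordinary linear Lasso fit of Y on X already points in the direction of β.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, s = 1000, 50, 3

# s-sparse direction with unit norm (support on the first s coordinates)
beta = np.zeros(d)
beta[:s] = 1.0 / np.sqrt(s)

# Gaussian design and an unknown monotone link (hypothetical choice);
# f increasing guarantees Cov(Y, X @ beta) != 0
X = rng.standard_normal((n, d))
f = lambda t: 2.0 * t + np.sin(t)
Y = f(X @ beta) + 0.1 * rng.standard_normal(n)

# a plain Lasso of Y on X recovers the direction of beta up to scale
model = Lasso(alpha=0.05).fit(X, Y)
est = model.coef_
support = np.flatnonzero(np.abs(est) > 1e-3)
direction = est / np.linalg.norm(est)
```

Because f is nonlinear, the Lasso estimate is biased in magnitude, but its normalized direction aligns with β and its support contains the true support — the two quantities the result above says are recoverable.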


13d4635deccc230c944e4ff6e03404b5-AuthorFeedback.pdf

Neural Information Processing Systems

We appreciate the valuable comments from the reviewers on the paper's presentation and typos. We will revise our work accordingly. Compared with these related works, we consider a larger model class.




2063a00c435aafbcc58c16ce1e522139-Paper-Conference.pdf

Neural Information Processing Systems

Amongst those functions, the simplest are single-index models f(x) = ϕ(x⊤θ), where the labels are generated by an arbitrary non-linear scalar link function ϕ applied to an unknown one-dimensional projection θ of the input data.
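The defining property of a single-index model is that the label depends on x only through the scalar projection x⊤θ, so perturbing x orthogonally to θ leaves the label unchanged. A minimal numpy sketch (with a hypothetical link ϕ = tanh and a random θ) makes this invariance concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# unknown unit-norm index direction and a hypothetical scalar link
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)
phi = np.tanh
f = lambda x: phi(x @ theta)

x1 = rng.standard_normal(d)

# perturb x1 only in directions orthogonal to theta:
# the projection x @ theta, and hence the label, is unchanged
v = rng.standard_normal(d)
v -= (v @ theta) * theta
x2 = x1 + v
```

Here f(x1) == f(x2) even though x1 ≠ x2, which is exactly why estimating a single-index model reduces to learning the one-dimensional direction θ together with the scalar function ϕ.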



Riesz Representer Fitting under Bregman Divergence: A Unified Framework for Debiased Machine Learning

Kato, Masahiro

arXiv.org Machine Learning

Estimating the Riesz representer is central to debiased machine learning for causal and structural parameter estimation. We propose generalized Riesz regression, a unified framework that estimates the Riesz representer by fitting a representer model via Bregman divergence minimization. This framework includes the squared loss and the Kullback--Leibler (KL) divergence as special cases: the former recovers Riesz regression, while the latter recovers tailored loss minimization. Under suitable model specifications, the dual problems correspond to covariate balancing, which we call automatic covariate balancing. Moreover, under the same specifications, outcome averages weighted by the estimated Riesz representer satisfy Neyman orthogonality even without estimating the regression function, a property we call automatic Neyman orthogonalization. This property not only reduces the estimation error of Neyman orthogonal scores but also clarifies a key distinction between debiased machine learning and targeted maximum likelihood estimation. Our framework can also be viewed as a generalization of density ratio fitting under Bregman divergences to Riesz representer estimation, and it applies beyond density ratio estimation. We provide convergence analyses for both reproducing kernel Hilbert space (RKHS) and neural network model classes. A Python package for generalized Riesz regression is available at https://github.com/MasaKat0/grr.
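To make the squared-loss special case (Riesz regression) concrete, here is a hedged sketch for the average-treatment-effect functional m(W, g) = g(1, X) − g(0, X), whose Riesz representer is α₀(D, X) = D/e(X) − (1 − D)/(1 − e(X)). The data-generating process and the linear feature map are illustrative assumptions, not the paper's setup. For a linear model α(w) = θ⊤φ(w), minimizing the empirical squared-loss objective θ⊤Gθ − 2θ⊤M (with G = Eₙ[φφ⊤], M = Eₙ[φ(1, X) − φ(0, X)]) reduces to a linear solve, and its first-order condition Gθ = M is exactly the covariate-balancing property the abstract calls automatic covariate balancing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# hypothetical DGP: logistic propensity for a binary treatment
x = rng.standard_normal(n)
e = 1.0 / (1.0 + np.exp(-x))
d = rng.binomial(1, e)

def feat(d, x):
    # illustrative linear-in-parameters feature map for the representer model
    return np.column_stack([d, 1 - d, d * x, (1 - d) * x])

Phi = feat(d, x)
G = Phi.T @ Phi / n                                      # Gram matrix E_n[phi phi^T]
M = (feat(np.ones(n), x) - feat(np.zeros(n), x)).mean(axis=0)

# squared-loss Riesz regression: minimize theta^T G theta - 2 theta^T M
theta = np.linalg.solve(G, M)
alpha_hat = Phi @ theta                                  # fitted representer values
```

By construction Eₙ[α̂ · φ] = Eₙ[φ(1, X) − φ(0, X)] holds exactly, so outcome averages weighted by α̂ balance every feature in the model class — the finite-dimensional analogue of the duality described in the abstract. The KL-divergence case of the framework would replace the squared loss with a different Bregman generator but keep the same structure.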